

Improving forecasting by learning quantile functions

#artificialintelligence

The quantile function is a mathematical function that takes a quantile level (a probability between 0 and 1) as input and outputs the corresponding value of a variable. It can answer questions like, "If I want to guarantee that 95% of my customers receive their orders within 24 hours, how much inventory do I need to keep on hand?" As such, the quantile function is commonly used in the context of forecasting questions. In practical cases, however, we rarely have a tidy formula for computing the quantile function. Instead, statisticians usually use regression analysis to approximate it for a single quantile level at a time.
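The inventory question above can be made concrete with a minimal sketch, assuming (purely for illustration) that daily demand is normally distributed with a made-up mean and standard deviation; the quantile function is then just the inverse CDF:

```python
# A minimal sketch of answering "how much inventory covers 95% of
# demand?" with a quantile function. The demand distribution and its
# parameters below are illustrative assumptions, not real data.
from statistics import NormalDist

demand = NormalDist(mu=500, sigma=80)  # hypothetical daily demand

# inv_cdf is the quantile function (inverse CDF): for level 0.95 it
# returns the stock level that covers demand on 95% of days.
inventory_needed = demand.inv_cdf(0.95)
print(round(inventory_needed))
```

In a tidy parametric case like this the quantile function is available in closed form; the articles' point is that real demand distributions rarely come with such a formula, which is why regression-based approximations are used instead.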


Latest AI Research at Amazon Improves Forecasting by Learning the Quantile Functions

#artificialintelligence

The quantile function is a mathematical function that takes a quantile (a probability ranging from 0 to 1) as an input and returns the value of a variable as an output. It can answer queries such as, "How much inventory do I need to maintain on hand if I want to guarantee that 95 percent of my customers receive their orders within 24 hours?" As a result, the quantile function is frequently utilized in forecasting questions. However, in practice, there is rarely a neat formula for computing the quantile function, so it is usually approximated with a regression model fit to a single quantile level at a time. That means that if you want to compute it for a different quantile, you'll need to create a new regression model, which nowadays usually entails retraining a neural network.
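The one-model-per-quantile point can be illustrated with the pinball (quantile) loss that quantile regression minimizes. This is a hedged sketch, not Amazon's method: for a fixed level tau, the constant predictor that minimizes average pinball loss over a sample is an empirical tau-quantile, and changing tau changes the minimizer, so each quantile level needs its own fitted model:

```python
# Sketch of the pinball (quantile) loss underlying quantile regression.
# Everything here is illustrative; the sample is arbitrary toy data.
def pinball_loss(y_true, y_pred, tau):
    # Asymmetric loss: under-prediction is weighted by tau,
    # over-prediction by (1 - tau).
    diff = y_true - y_pred
    return tau * diff if diff >= 0 else (tau - 1) * diff

def best_constant(sample, tau, candidates):
    # The candidate minimizing average pinball loss approximates
    # the empirical tau-quantile of the sample.
    return min(candidates,
               key=lambda c: sum(pinball_loss(y, c, tau) for y in sample))

sample = [3, 1, 4, 1, 5, 9, 2, 6]
# A different tau yields a different minimizer -- hence a new
# regression model for each quantile level.
print(best_constant(sample, 0.5, sample))  # near the median
print(best_constant(sample, 0.9, sample))  # near the upper tail
```

The asymmetry of the loss is what selects the quantile: with tau = 0.9, under-predicting costs nine times as much as over-predicting, pushing the minimizer toward the upper tail.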


Optimizing Generalized Rate Metrics through Game Equilibrium

Narasimhan, Harikrishna, Cotter, Andrew, Gupta, Maya

arXiv.org Machine Learning

We present a general framework for solving a large class of learning problems with non-linear functions of classification rates. This includes problems where one wishes to optimize a non-decomposable performance metric such as the F-measure or G-mean, and constrained training problems where the classifier needs to satisfy non-linear rate constraints such as predictive parity fairness, distribution divergences or churn ratios. We extend previous two-player game approaches for constrained optimization to a game between three players to decouple the classifier rates from the non-linear objective, and seek to find an equilibrium of the game. Our approach generalizes many existing algorithms, and makes possible new algorithms with more flexibility and tighter handling of non-linear rate constraints. We provide convergence guarantees for convex functions of rates, and show how our methodology can be extended to handle sums of ratios of rates. Experiments on different fairness tasks confirm the efficacy of our approach.


The Social Cost of Strategic Classification

Milli, Smitha, Miller, John, Dragan, Anca D., Hardt, Moritz

arXiv.org Machine Learning

As machine learning increasingly supports consequential decision making, its vulnerability to manipulation and gaming is of growing concern. When individuals learn to adapt their behavior to the specifics of a statistical decision rule, its original predictive power will deteriorate. This widely observed empirical phenomenon, known as Campbell's Law or Goodhart's Law, is often summarized as: "Once a measure becomes a target, it ceases to be a good measure" [25]. Institutions using machine learning to make high-stakes decisions naturally wish to make their classifiers robust to strategic behavior. A growing line of work has sought algorithms that achieve higher utility for the institution in settings where we anticipate a strategic response from the classified individuals [10, 5, 14]. Broadly speaking, the resulting solution concepts correspond to more conservative decision boundaries that increase robustness to some form of covariate shift.